Joint super-resolution and inverse tone-mapping (joint SR-ITM) aims to improve the visual quality of videos that are deficient in both resolution and dynamic range. The problem arises when a 4K high dynamic range (HDR) TV is used to watch low-resolution standard dynamic range (LR SDR) videos. Previous methods that rely on learning local information typically fall short in preserving color conformity and long-range structural similarity, resulting in unnatural color transitions and texture artifacts. To tackle these challenges, we propose a global priors guided modulation network (GPGMNet) for joint SR-ITM. Specifically, we design a global priors extraction module (GPEM) to extract color conformity priors and structural similarity priors that are beneficial to the ITM and SR tasks, respectively. To further exploit the global priors while preserving spatial information, we devise multiple global priors guided spatial-wise modulation blocks (GSMBs) with a few parameters for intermediate feature modulation, in which the modulation parameters are generated from the shared global priors and the spatial feature maps produced by a spatial pyramid convolution block (SPCB). With these elaborate designs, GPGMNet achieves higher visual quality at lower computational complexity. Extensive experiments demonstrate that our proposed GPGMNet outperforms state-of-the-art methods. Specifically, our model surpasses the state-of-the-art by 0.64 dB in PSNR with 69% fewer parameters and a 3.1× speedup. The code will be released soon.
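For intuition, here is a minimal sketch of what a global-prior-guided spatial modulation block could look like, assuming the shared global prior is a vector and the modulation is a per-channel affine transform combined with a light spatial branch; the module name, layer choices, and shapes are illustrative, not the paper's exact GSMB/SPCB design.

```python
import torch
import torch.nn as nn

class GuidedSpatialModulation(nn.Module):
    """Illustrative sketch: modulate intermediate features with scale/shift
    parameters generated from a shared global prior vector, plus a conv
    branch that keeps a spatial component in the modulation."""
    def __init__(self, channels: int, prior_dim: int):
        super().__init__()
        # Project the global prior to per-channel scale and shift parameters.
        self.to_scale = nn.Linear(prior_dim, channels)
        self.to_shift = nn.Linear(prior_dim, channels)
        # A light conv branch supplies the spatially varying part.
        self.spatial = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = feat.shape
        scale = self.to_scale(prior).view(b, c, 1, 1)
        shift = self.to_shift(prior).view(b, c, 1, 1)
        return feat * (1 + scale) + shift + self.spatial(feat)

# Usage: feat = torch.randn(2, 64, 32, 32); prior = torch.randn(2, 128)
# out = GuidedSpatialModulation(64, 128)(feat, prior)
```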
Integrating logical reasoning and machine learning by approximating logical inference with differentiable operators is a widely used technique in neuro-symbolic systems. However, some differentiable operators can introduce a significant bias during backpropagation and degrade the performance of neuro-symbolic learning. In this paper, we reveal that this bias, named implication bias, is common in loss functions derived from fuzzy logic operators. Furthermore, we propose a simple yet effective method to transform a biased loss function into a Reduced Implication-bias Logic Loss (RILL) to address the above problem. Empirical studies show that RILL achieves significant improvements compared with biased logic loss functions, especially when the knowledge base is incomplete, and that it is more robust than the compared methods when labelled data is insufficient.
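To see the bias concretely, take the Reichenbach fuzzy implication I(a, b) = 1 - a + a·b as an assumed example operator and the loss -log I(a, b): the gradient with respect to the antecedent a is always non-negative, so gradient descent can satisfy the rule vacuously by pushing a toward 0 rather than making the consequent b true. A tiny autograd demo:

```python
import torch

# Reichenbach fuzzy implication: I(a, b) = 1 - a + a * b (an assumed example
# operator; the paper covers a family of fuzzy-logic-derived losses).
def implication_loss(a, b):
    return -torch.log(1 - a + a * b + 1e-8)

a = torch.tensor(0.6, requires_grad=True)  # truth value of the antecedent
b = torch.tensor(0.3, requires_grad=True)  # truth value of the consequent
loss = implication_loss(a, b)
loss.backward()
# dL/da = (1 - b) / I(a, b) >= 0, so descent always pushes a toward 0: the
# loss can be minimized by making the rule vacuously true instead of making
# the consequent b true -- this is the implication bias.
print(a.grad, b.grad)  # a.grad > 0 (descent lowers a), b.grad < 0 (raises b)
```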
In the past, image retrieval was the mainstream solution for cross-view geo-localization and UAV visual localization tasks. In a nutshell, image retrieval obtains the final required information, such as GPS coordinates, through a transitional perspective. However, image retrieval is not fully end-to-end, and it involves redundant operations, such as preparing a feature gallery in advance and the sampling-interval problem in gallery construction, which make large-scale applications difficult to implement. In this paper, we propose an end-to-end localization scheme, Finding Point with Image (FPI), which aims to directly find the corresponding location in a satellite-view image (source B) from a UAV-view image (source A). To verify the feasibility of our framework, we construct a new dataset (UL14) designed for the UAV visual self-localization task. We also build a transformer-based baseline to achieve end-to-end training. In addition, previous evaluation methods are no longer applicable under the FPI framework, so meter-level accuracy (MA) and relative distance score (RDS) are proposed to evaluate the accuracy of UAV localization. A preliminary comparison between FPI and image retrieval methods shows that the FPI architecture improves performance in both speed and efficiency. Nevertheless, the FPI task remains a great challenge due to the large differences between views and the drastic spatial scale transformations.
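As a rough illustration of the two proposed metrics (assumed forms only; the paper's exact definitions of MA and RDS may differ), MA can be read as the fraction of predictions within a given number of meters of the ground truth, and RDS as a distance-decayed score normalized by image size:

```python
import numpy as np

def meter_level_accuracy(pred_xy, gt_xy, threshold_m=5.0):
    """Fraction of predicted positions within `threshold_m` meters of the
    ground truth (an assumed form of MA for illustration)."""
    d = np.linalg.norm(pred_xy - gt_xy, axis=1)
    return float((d <= threshold_m).mean())

def relative_distance_score(pred_xy, gt_xy, img_wh, k=10.0):
    """Distance-decayed score with the error normalized by image width and
    height (an assumed form of RDS for illustration; `k` is hypothetical)."""
    rel = np.linalg.norm((pred_xy - gt_xy) / np.asarray(img_wh), axis=1)
    return float(np.exp(-k * rel).mean())
```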
Most real-world problems that machine learning algorithms are expected to solve face the situation of 1) an unknown data distribution, 2) little domain-specific knowledge, and 3) datasets with limited annotation. We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV), a learning framework for any dataset with abundant unlabeled data but very few labeled examples. By training a generative model in a purely unsupervised way, the framework utilizes the data distribution to build a compressor. Using a compressor-based distance metric derived from Kolmogorov complexity, together with few labeled data, NPC-LV classifies without any further training. We show that NPC-LV outperforms supervised methods on all three image classification datasets in the low-data regime, and even outperforms semi-supervised learning methods on CIFAR-10. We demonstrate how and when the negative evidence lower bound (nELBO) can be used as an approximate compressed length for classification. By revealing the correlation between compression rate and classification accuracy, we illustrate that under NPC-LV, improving the generative model can enhance downstream classification accuracy.
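The compressor-based distance in question is typically the normalized compression distance (NCD), the practical stand-in for the uncomputable Kolmogorov-complexity-based information distance; NPC-LV swaps the generic compressor for one induced by the trained generative model, using nELBO as the approximate code length. A sketch of the recipe with gzip as a stand-in compressor:

```python
import gzip

def clen(x: bytes) -> int:
    """Approximate code length of x via a real compressor."""
    return len(gzip.compress(x))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(query: bytes, labeled: list[tuple[bytes, int]]) -> int:
    # 1-nearest-neighbor under NCD: no parameters, no further training.
    return min(labeled, key=lambda item: ncd(query, item[0]))[1]
```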
This paper proposes a decoder-side Cross-Resolution Synthesis (CRS) module to pursue better compression efficiency beyond the latest Versatile Video Coding (VVC), where we encode intra frames at the original high resolution (HR), compress inter frames at a lower resolution (LR), and then upscale the decoded LR inter frames with the guidance of the preceding HR intra frame and the neighboring LR inter frames. For LR inter frames, a motion alignment and aggregation network (MAN) is devised to produce a temporally aggregated motion representation that best guarantees temporal smoothness; another texture compensation network (TCN) generates a texture representation from the decoded HR intra frame to better augment spatial details; finally, a similarity-driven fusion engine synthesizes the motion and texture representations to upscale the LR inter frames, removing compression and resolution re-sampling noise. Augmenting VVC with the proposed CRS yields on average 8.76% and 11.93% Bjøntegaard delta rate (BD-rate) reduction against the latest VVC anchor in random access (RA) and low-delay P (LDP) settings, respectively. In addition, experimental comparisons with state-of-the-art super-resolution (SR) based VVC enhancement methods, along with ablation studies, further demonstrate the superior efficiency and generalization of the proposed algorithm. All materials will be made publicly available at https://njuvision.github.io/crs for reproducible research.
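The BD-rate figures quoted above come from Bjøntegaard's method, which fits rate-distortion curves and averages the bitrate difference at equal quality; a common cubic-fit implementation looks roughly like this (a sketch of standard practice, not the authors' evaluation script):

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjoentegaard delta-rate: average bitrate change (%) of the test codec
    versus the anchor at equal quality, via cubic fits of log-rate vs PSNR."""
    log_ra, log_rt = np.log10(rate_anchor), np.log10(rate_test)
    fit_a = np.polyfit(psnr_anchor, log_ra, 3)
    fit_t = np.polyfit(psnr_test, log_rt, 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a, int_t = np.polyint(fit_a), np.polyint(fit_t)
    avg_a = (np.polyval(int_a, hi) - np.polyval(int_a, lo)) / (hi - lo)
    avg_t = (np.polyval(int_t, hi) - np.polyval(int_t, lo)) / (hi - lo)
    return (10 ** (avg_t - avg_a) - 1) * 100  # negative means bitrate saving
```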
We aim to bridge the gap between our common-sense few-sample human learning and large-data machine learning. We derive a theory of human-like few-shot learning from the von Neumann-Landauer principle. Modelling human learning is difficult, as how people learn varies from one person to another. Under commonly accepted definitions, we prove that all human or animal few-shot learning, and major models of such learning including the Free Energy Principle and Bayesian Program Learning, approximate our theory under the Church-Turing thesis. We find that deep generative models such as the variational autoencoder (VAE) can be used to approximate our theory and perform significantly better than baseline models, including deep neural networks, for image recognition, low-resource language processing, and character recognition.
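As a reference point for the VAE experiments, a minimal negative-ELBO training loss in the standard Gaussian-latent formulation (illustrative only; not the paper's architecture) looks like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Minimal MLP VAE for flattened images in [0, 1]; purely illustrative."""
    def __init__(self, dim=784, latent=32):
        super().__init__()
        self.enc = nn.Linear(dim, 256)
        self.mu = nn.Linear(256, latent)
        self.logvar = nn.Linear(256, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                 nn.Linear(256, dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.dec(z)
        # Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I)).
        rec = F.binary_cross_entropy_with_logits(recon, x, reduction='sum')
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return (rec + kl) / x.size(0)

# Usage: loss = TinyVAE()(torch.rand(8, 784)); loss.backward()
```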
When using LiDAR semantic segmentation models for safety-critical applications such as autonomous driving, it is essential to understand and improve their robustness with respect to a large range of LiDAR corruptions. In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions. To rigorously evaluate the robustness and generalizability of current approaches, we propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups, namely adverse weather, measurement noise and cross-device discrepancy. We then systematically investigate 11 LiDAR semantic segmentation models, spanning different input representations (e.g., point clouds, voxels, projected images), network architectures and training schemes. Through this study, we obtain two insights: 1) the input representation plays a crucial role in robustness; under specific corruptions, different representations perform quite differently. 2) Although state-of-the-art methods on LiDAR semantic segmentation achieve promising results on clean data, they are less robust when dealing with noisy data. Finally, based on the above observations, we design a robust LiDAR segmentation model (RLSeg) which greatly boosts robustness with simple but effective modifications. We expect that our benchmark, comprehensive analysis, and observations can boost future research in robust LiDAR semantic segmentation for safety-critical applications.
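For a flavor of what such corruptions look like in code, here are two illustrative perturbations in the spirit of the measurement-noise group (examples only, not the benchmark's 16 official corruptions):

```python
import numpy as np

def corrupt_point_cloud(points: np.ndarray, jitter_std: float = 0.03,
                        drop_ratio: float = 0.2, seed: int = 0) -> np.ndarray:
    """Apply Gaussian jitter to XYZ coordinates and random point dropout.
    `points` has shape (N, C) with XYZ in the first three columns."""
    rng = np.random.default_rng(seed)
    keep = rng.random(len(points)) >= drop_ratio      # random dropout mask
    noisy = points[keep].copy()
    noisy[:, :3] += rng.normal(0.0, jitter_std, size=noisy[:, :3].shape)
    return noisy

# Usage: robustness is then measured as the mIoU drop of a trained model
# on corrupt_point_cloud(scan) versus the clean scan.
```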
In recent years, arbitrary image style transfer has attracted more and more attention. Given a pair of content and style images, the goal is to synthesize a stylized image that retains the content of the former while capturing the style patterns of the latter. However, it is difficult to maintain the trade-off between content details and style features. When an image is stylized with sufficient style patterns, the content details may be damaged, and sometimes the objects in the image cannot be clearly distinguished. For this reason, we present a new transformer-based method named STT for image style transfer, together with an edge loss that noticeably enhances content details and avoids the blurred results caused by excessive rendering of style features. Qualitative and quantitative experiments demonstrate that STT achieves performance comparable to state-of-the-art image style transfer methods while alleviating the content leak problem.
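One plausible form of such an edge loss (an assumption for illustration; the paper's exact formulation may differ) is the L1 distance between Sobel edge maps of the stylized output and the content image:

```python
import torch
import torch.nn.functional as F

def edge_loss(output: torch.Tensor, content: torch.Tensor) -> torch.Tensor:
    """L1 distance between Sobel edge maps of the stylized output and the
    content image; expects grayscale tensors of shape [B, 1, H, W]."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()                                   # Sobel y is the transpose
    kx = kx.reshape(1, 1, 3, 3).to(output)
    ky = ky.reshape(1, 1, 3, 3).to(output)

    def edges(img):
        gx = F.conv2d(img, kx, padding=1)         # horizontal gradients
        gy = F.conv2d(img, ky, padding=1)         # vertical gradients
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

    return F.l1_loss(edges(output), edges(content))
```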
Existing measures and representations for trajectories have two longstanding fundamental shortcomings: they are computationally expensive, and they cannot guarantee the `uniqueness' property of a distance function, i.e., dist(X, Y) = 0 if and only if X = Y, where $X$ and $Y$ are two trajectories. This paper proposes a simple yet powerful way to represent trajectories and measure the similarity between two trajectories using a distributional kernel, addressing both shortcomings. It is a principled approach based on kernel mean embedding, which has a strong theoretical underpinning. It has three distinctive features in comparison with existing approaches. (1) A distributional kernel is used for the very first time for trajectory representation and similarity measurement. (2) It does not rely on the point-to-point distances used in most existing trajectory distances. (3) It requires no learning, unlike existing learning and deep learning approaches. We show the generality of this new approach in three applications: (a) trajectory anomaly detection, (b) anomalous sub-trajectory detection, and (c) trajectory pattern mining. We identify that the distributional kernel has (i) a unique data-dependent property, together with the above uniqueness property, which are the key factors leading to its superior task-specific performance; and (ii) a runtime that is orders of magnitude faster than existing distance measures.
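The core computation is simple: with kernel mean embeddings, the similarity of two trajectories is the average pairwise kernel value between their points, and the induced distance is zero iff the embedded distributions coincide (for a characteristic kernel), which is exactly the uniqueness property above. A sketch with a Gaussian kernel as a stand-in base kernel (the paper's distributional kernel may use a different one):

```python
import numpy as np

def kme_similarity(X: np.ndarray, Y: np.ndarray, gamma: float = 1.0) -> float:
    """Inner product of kernel mean embeddings of trajectories X (n x d)
    and Y (m x d): K(X, Y) = mean_{x in X, y in Y} exp(-gamma ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return float(np.exp(-gamma * d2).mean())

def kme_distance(X: np.ndarray, Y: np.ndarray, gamma: float = 1.0) -> float:
    # Squared distance in the embedding space; 0 iff the two point
    # distributions coincide when the base kernel is characteristic.
    return (kme_similarity(X, X, gamma) + kme_similarity(Y, Y, gamma)
            - 2 * kme_similarity(X, Y, gamma))
```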
With the increasing capabilities of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions based only on contexts augmented with a few training examples. Exploring ICL to evaluate and extrapolate the abilities of LLMs has become a new trend. In this paper, we aim to survey and summarize the progress, challenges, and future work in ICL. We first present a formal definition of ICL and clarify its relation to related studies. Then, we organize and discuss advanced techniques of ICL, including training strategies, prompting strategies, and so on. Finally, we present the challenges of ICL and provide potential directions for further research. We hope our work can encourage more research on uncovering how ICL works and on improving ICL in future work.
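Mechanically, ICL amounts to prepending a few input-output demonstrations to the test input and letting the frozen LLM complete the pattern, with no weight updates; a toy prompt builder (the template and task are illustrative):

```python
def build_icl_prompt(demos: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot in-context learning prompt: a handful of
    demonstrations followed by the unanswered test input."""
    lines = [f"Review: {x}\nSentiment: {y}" for x, y in demos]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [("A great movie.", "positive"), ("Terribly boring.", "negative")]
print(build_icl_prompt(demos, "I loved every minute."))
# The LLM is expected to continue with "positive" purely from context.
```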